A Omitted Proofs

Neural Information Processing Systems

In this section we include all of the proofs omitted from the main body. For the convenience of the reader, we restate each claim before proceeding with its proof.

A.1 Preliminary Proofs

We commence with the proof of Proposition 1.

Proposition 1. For any η > 0 and at all times t ∈ N, the OFTRL optimization problem on Line 3 of Algorithm 1 admits a unique optimal solution.

Proof. Uniqueness follows immediately from strict convexity. In the rest of the proof we focus on the existence part. We start by showing that there exists a point x ∈ X whose coordinates are all strictly positive.
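As a minimal illustration of the "strict convexity implies a unique optimizer" step, the sketch below runs one optimistic-FTRL update on the probability simplex with an entropic regularizer, which admits the closed-form softmax solution. This is an assumption for illustration only: the paper's Algorithm 1 uses a different, lifted regularizer, and the names `oftrl_step`, `grad_history`, and `prediction` are hypothetical.

```python
import numpy as np

def oftrl_step(grad_history, prediction, eta):
    """One optimistic-FTRL step on the probability simplex with an
    entropic regularizer: argmax_x <sum of past gradients + prediction, x>
    - (1/eta) * sum_i x_i log x_i. Strict convexity of the entropy term
    makes the maximizer unique, and it is given in closed form by the
    softmax, so every coordinate is strictly positive."""
    score = eta * (np.sum(grad_history, axis=0) + prediction)
    score -= score.max()          # shift for numerical stability
    x = np.exp(score)
    return x / x.sum()            # the unique (interior) optimizer

# Hypothetical one-step history with an optimistic prediction equal to
# the most recent gradient.
grads = [np.array([1.0, 0.0, 0.5])]
x = oftrl_step(grads, grads[-1], eta=0.1)
```

Note how the returned point lies in the relative interior of the simplex, mirroring the proposition's first step of exhibiting a point with all strictly positive coordinates.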






2438d634f0ed1640934d31376c110a92-Supplemental-Conference.pdf

Neural Information Processing Systems

In this section we briefly introduce the representation theory of the three groups used in this work. The complex irreducible representations are often used and correspond to the circular harmonics. The parallel transport operator transports vector fields defined over a space. These invariant subspaces can be identified as follows. This is where the SO(2) ambiguity introduced in Section 2 lies.
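To make the connection between complex irreducible representations and circular harmonics concrete, here is a small sketch under the standard fact that the complex 1-dimensional irreps of SO(2) are the maps θ ↦ e^{ikθ} indexed by an integer frequency k; the function name `irrep` is hypothetical.

```python
import numpy as np

def irrep(k, theta):
    """Complex 1-d irreducible representation of SO(2) with integer
    frequency k: the circular harmonic rho_k(theta) = exp(i*k*theta)."""
    return np.exp(1j * k * theta)

a, b = 0.7, 1.9
for k in (0, 1, 2):
    # Homomorphism property: rho_k(a) * rho_k(b) == rho_k(a + b),
    # i.e. composing two rotations multiplies their representations.
    assert np.isclose(irrep(k, a) * irrep(k, b), irrep(k, a + b))
```

Each frequency k spans one invariant subspace, which is the sense in which the invariant subspaces mentioned above can be identified.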




Same or Different? Diff-Vectors for Authorship Analysis

Corbara, Silvia, Moreo, Alejandro, Sebastiani, Fabrizio

arXiv.org Artificial Intelligence

We investigate the effects on authorship identification tasks of a fundamental shift in how the vectorial representations of documents given as input to a supervised learner are conceived. In ``classic'' authorship analysis, a feature vector represents a document, the value of a feature represents (an increasing function of) the relative frequency of the feature in the document, and the class label represents the author of the document. We instead investigate the situation in which a feature vector represents an unordered pair of documents, the value of a feature represents the absolute difference in the relative frequencies (or increasing functions thereof) of the feature in the two documents, and the class label indicates whether the two documents are by the same author or not. This latter (learner-independent) type of representation has occasionally been used before, but has never been studied systematically. We argue that it is advantageous, and that in some cases (e.g., authorship verification) it provides a much larger quantity of information to the training process than the standard representation. The experiments that we carry out on several publicly available datasets (among which one that we here make available for the first time) show that feature vectors representing pairs of documents (which we call Diff-Vectors) bring about systematic improvements in the effectiveness of authorship identification tasks, especially when training data are scarce (as is often the case in real-life authorship identification scenarios). Our experiments tackle same-author verification, authorship verification, and closed-set authorship attribution; while DVs are naturally geared towards solving the first task, we also provide two novel methods for solving the second and third that use a solver for the first as a building block.
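The Diff-Vector construction described above can be sketched directly from the abstract's definition: take each document's relative feature frequencies and form the element-wise absolute difference for the pair. The helper names `rel_freqs` and `diff_vector` and the toy vocabulary are hypothetical; the actual feature set and frequency transforms used in the paper may differ.

```python
import numpy as np

def rel_freqs(doc, vocab):
    """Relative frequency of each vocabulary feature in a document."""
    tokens = doc.lower().split()
    counts = np.array([tokens.count(w) for w in vocab], dtype=float)
    return counts / max(len(tokens), 1)

def diff_vector(doc_a, doc_b, vocab):
    """Diff-Vector for an unordered document pair: the element-wise
    absolute difference of the two documents' relative feature
    frequencies. Absolute values make it symmetric in the pair order."""
    return np.abs(rel_freqs(doc_a, vocab) - rel_freqs(doc_b, vocab))

vocab = ["the", "of", "and"]
dv = diff_vector("the cat sat on the mat", "of mice and men", vocab)
```

A same-author classifier trained on such vectors then labels the pair rather than either document, which is what makes the representation learner-independent.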


Near-Optimal No-Regret Learning Dynamics for General Convex Games

Farina, Gabriele, Anagnostides, Ioannis, Luo, Haipeng, Lee, Chung-Wei, Kroer, Christian, Sandholm, Tuomas

arXiv.org Artificial Intelligence

A recent line of work has established uncoupled learning dynamics such that, when employed by all players in a game, each player's \emph{regret} after $T$ repetitions grows polylogarithmically in $T$, an exponential improvement over the traditional guarantees within the no-regret framework. However, so far these results have been limited to certain classes of games with structured strategy spaces -- such as normal-form and extensive-form games. Whether $O(\text{polylog } T)$ regret bounds can be obtained for general convex and compact strategy sets -- which occur in many fundamental models in economics and multiagent systems -- while retaining efficient strategy updates has remained an important open question. In this paper, we answer it in the positive by establishing the first uncoupled learning algorithm with $O(\log T)$ per-player regret in general \emph{convex games}, that is, games with concave utility functions supported on arbitrary convex and compact strategy sets. Our learning dynamics are based on an instantiation of optimistic follow-the-regularized-leader over an appropriately \emph{lifted} space using a \emph{self-concordant regularizer} that is, peculiarly, not a barrier for the feasible region. Further, our learning dynamics are efficiently implementable given access to a proximal oracle for the convex strategy set, leading to $O(\log\log T)$ per-iteration complexity; we also give extensions for the case in which only a \emph{linear} optimization oracle is available. Finally, we adapt our dynamics to guarantee $O(\sqrt{T})$ regret in the adversarial regime. Even in those special cases where prior results apply, our algorithm improves over the state-of-the-art regret bounds either in terms of the dependence on the number of iterations or on the dimension of the strategy sets.
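To give a feel for the uncoupled optimistic dynamics the abstract describes, the sketch below runs optimistic multiplicative weights (optimistic FTRL with an entropic regularizer on the simplex) in a zero-sum matrix game. This is a simplified special case for normal-form games, not the paper's algorithm for general convex games; all variable names are illustrative.

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player (zero-sum).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def mwu(p, grad, eta):
    """Multiplicative-weights step on the probability simplex."""
    w = p * np.exp(eta * grad)
    return w / w.sum()

eta, T = 0.1, 2000
x = np.array([0.8, 0.1, 0.1])      # start far from equilibrium
y = np.ones(3) / 3
gx_prev, gy_prev = A @ y, -A.T @ x
avg_x, avg_y = np.zeros(3), np.zeros(3)
for _ in range(T):
    gx, gy = A @ y, -A.T @ x
    # Optimistic step: follow 2*g_t - g_{t-1}, i.e. predict that the
    # next gradient will resemble the current one. Because both players
    # use the same uncoupled rule, gradients change slowly and the
    # optimistic prediction is accurate, keeping regret small.
    x = mwu(x, 2 * gx - gx_prev, eta)
    y = mwu(y, 2 * gy - gy_prev, eta)
    gx_prev, gy_prev = gx, gy
    avg_x += x
    avg_y += y
avg_x /= T
avg_y /= T
# The duality gap of the average strategies is bounded by the sum of
# the players' average regrets, so it shrinks as T grows.
gap = np.max(A @ avg_y) + np.max(-A.T @ avg_x)
```

The small duality gap of the averaged play is the standard way low individual regret translates into joint convergence toward equilibrium.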